Approximation of Eigenfunctions in Kernel-based Spaces
Kernel-based methods in Numerical Analysis have the advantage of yielding
optimal recovery processes in the "native" Hilbert space $\mathcal{H}$ in which they
are reproducing. Continuous kernels on compact domains have an expansion into
eigenfunctions that are both $L_2$-orthonormal and orthogonal in $\mathcal{H}$
(Mercer expansion). This paper examines the corresponding eigenspaces and
proves that they have optimality properties among all other subspaces of
$\mathcal{H}$. These results have strong connections to $n$-widths in Approximation
Theory, and they establish that errors of optimal approximations are closely
related to the decay of the eigenvalues.
Though the eigenspaces and eigenvalues are not readily available, they can be
well approximated using the standard $n$-dimensional subspaces spanned by
translates of the kernel with respect to nodes or centers. We give error
bounds for the numerical approximation of the eigensystem via such subspaces. A
series of examples shows that our numerical technique, via a greedy point
selection strategy, allows the eigensystems to be calculated with good accuracy.
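For reference, the Mercer expansion referred to above has the standard form
$$K(x, y) = \sum_{n \ge 1} \lambda_n \, \varphi_n(x) \, \varphi_n(y), \qquad \lambda_1 \ge \lambda_2 \ge \dots > 0,$$
with the eigenfunctions $\varphi_n$ orthonormal in $L_2$ and orthogonal in $\mathcal{H}$, where $\|\varphi_n\|_{\mathcal{H}}^2 = 1/\lambda_n$.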
Approximation Theory XV: San Antonio 2016
These proceedings are based on papers presented at the international conference Approximation Theory XV, which was held May 22–25, 2016 in San Antonio, Texas. The conference was the fifteenth in a series of meetings in Approximation Theory held at various locations in the United States, and was attended by 146 participants. The book contains longer survey papers by some of the invited speakers covering topics such as compressive sensing, isogeometric analysis, and scaling limits of polynomials and entire functions of exponential type.
The book also includes papers on a variety of current topics in Approximation Theory drawn from areas such as advances in kernel approximation with applications, approximation theory and algebraic geometry, multivariate splines for applications, practical function approximation, approximation of PDEs, wavelets and framelets with applications, approximation theory in signal processing, compressive sensing, rational interpolation, spline approximation in isogeometric analysis, approximation of fractional differential equations, numerical integration formulas, and trigonometric polynomial approximation.
Kernel Methods for Surrogate Modeling
This chapter deals with kernel methods as a special class of techniques for
surrogate modeling. Kernel methods have proven to be efficient in machine
learning, pattern recognition and signal analysis due to their flexibility,
excellent experimental performance and elegant functional analytic background.
These data-based techniques provide so-called kernel expansions, i.e., linear
combinations of kernel functions which are generated from given input-output
point samples that may be arbitrarily scattered. In particular, these
techniques are meshless: they neither require nor depend on a grid, and are
hence less prone to the curse of dimensionality, even for high-dimensional problems.
In contrast to projection-based model reduction, we do not necessarily assume
a high-dimensional model, but a general function that models input-output
behavior within some simulation context. This could be a micro-model in a
multiscale simulation, a submodel in a coupled system, an initialization
function for a solver, a coefficient function in a PDE, etc.
First, kernel surrogates can be useful if the input-output function is
expensive to evaluate, e.g., the result of a finite element simulation. Here,
acceleration can be obtained by sparse kernel expansions. Second, if a function
is available only via measurements or a few function evaluation samples, kernel
approximation techniques can provide function surrogates that allow global
evaluation.
We present some important kernel approximation techniques, namely kernel
interpolation, greedy kernel approximation, and support vector regression.
Pseudo-code is provided for ease of reproducibility. In order to illustrate the
main features, commonalities and differences, we compare these techniques on a
real-world application. The experiments clearly indicate the enormous
acceleration potential.
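To make the simplest of these techniques concrete, here is a minimal, self-contained sketch of kernel interpolation; the Gaussian kernel, shape parameter, and toy data are illustrative assumptions rather than the chapter's specific setup:

```python
import numpy as np

def gaussian_kernel(X, Y, eps=10.0):
    """Gaussian RBF kernel matrix K[i, j] = exp(-eps^2 * ||x_i - y_j||^2)."""
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-(eps ** 2) * d2)

# Scattered 1-D input-output samples (a jittered grid keeps the
# interpolation matrix well conditioned in this toy example)
rng = np.random.default_rng(42)
X = ((np.arange(30) + 0.5 * rng.random(30)) / 30.0)[:, None]
f = np.sin(4 * np.pi * X[:, 0])

# Kernel interpolation: solve the symmetric positive definite system
# A c = f, giving the kernel expansion s(x) = sum_j c_j K(x, x_j)
A = gaussian_kernel(X, X)
c = np.linalg.solve(A, f)

# The surrogate can now be evaluated globally at new inputs
X_new = np.linspace(0.0, 1.0, 200)[:, None]
s = gaussian_kernel(X_new, X) @ c
```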
RBF approximation of large datasets by partition of unity and local stabilization
We present an algorithm to approximate large datasets by Radial Basis Function
(RBF) techniques. The method couples a fast domain decomposition procedure with a
localized stabilization method. The resulting algorithm can efficiently deal with large
problems, and it is robust with respect to the typical instability of kernel methods.
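As a rough, generic sketch of the partition-of-unity blending step (1-D, with Wendland weights; the paper's specific domain decomposition and local stabilization are not reproduced here):

```python
import numpy as np

def wendland_weight(r):
    """Compactly supported Wendland C^2 function; zero for r >= 1."""
    return np.maximum(1.0 - r, 0.0) ** 4 * (4.0 * r + 1.0)

def pu_blend(x_eval, patch_centers, patch_radius, local_fits):
    """Combine local interpolants s_j into s(x) = sum_j w_j(x) s_j(x).

    Assumes the (overlapping) patches cover every evaluation point;
    local_fits[j] is a callable returning the j-th local RBF fit.
    """
    r = np.abs(x_eval[:, None] - patch_centers[None, :]) / patch_radius
    w = wendland_weight(r)
    w /= w.sum(axis=1, keepdims=True)  # Shepard normalization: sum_j w_j = 1
    s_local = np.column_stack([s(x_eval) for s in local_fits])
    return (w * s_local).sum(axis=1)
```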
Analysis of target data-dependent greedy kernel algorithms: Convergence rates for $f$-, $f \cdot P$- and $f/P$-greedy
Data-dependent greedy algorithms in kernel spaces are known to provide fast
converging interpolants, while being extremely easy to implement and efficient
to run. Despite this experimental evidence, no detailed theory has yet been
presented. This situation is unsatisfactory, especially when compared to the
case of the data-independent $P$-greedy algorithm, for which optimal
convergence rates are available, despite its performance usually being
inferior to that of target data-dependent algorithms.
In this work we fill this gap by first defining a new scale of greedy
algorithms for interpolation that comprises all the existing ones within a
unified analysis, where the degree of dependency of the selection criterion on the
functional data is quantified by a real parameter. We then prove new
convergence rates where this degree is taken into account and we show that,
possibly up to a logarithmic factor, target data-dependent selection strategies
provide faster convergence.
In particular, for the first time we obtain convergence rates for target
data-adaptive interpolation that are faster than the ones given by uniform points,
without the need for any special assumptions on the target function. The rates
are confirmed by a number of examples.
These results are made possible by a new analysis of greedy algorithms in
general Hilbert spaces.
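For orientation, here is a minimal sketch of a target data-dependent selection loop in its plain $f$-greedy form; the Gaussian kernel and toy data are illustrative, and the paper's unified $\beta$-parameterized criterion is not reproduced:

```python
import numpy as np

def gauss(X, Y, eps=6.0):
    """Gaussian kernel matrix on 1-D point sets X and Y."""
    return np.exp(-((eps * (X[:, None] - Y[None, :])) ** 2))

def f_greedy(X_pool, f_pool, n_max=15, tol=1e-8):
    """f-greedy: repeatedly add the point of largest absolute residual."""
    sel = []
    for _ in range(n_max):
        if sel:
            # Interpolate on the current centers and form the residual
            c = np.linalg.solve(gauss(X_pool[sel], X_pool[sel]), f_pool[sel])
            res = f_pool - gauss(X_pool, X_pool[sel]) @ c
        else:
            res = f_pool.copy()
        i = int(np.argmax(np.abs(res)))
        if abs(res[i]) < tol:
            break
        sel.append(i)
    return sel

# Toy target sampled on a fine pool of candidate points
X = np.linspace(0.0, 1.0, 500)
f = np.sin(6 * np.pi * X) * np.exp(-X)
centers = f_greedy(X, f)
```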
Kernel-Based Models for Influence Maximization on Graphs based on Gaussian Process Variance Minimization
The inference of novel knowledge, the discovery of hidden patterns, and the
uncovering of insights from large amounts of data from a multitude of sources
make Data Science (DS) an art rather than just a scientific discipline.
The study and design of mathematical models able to analyze information
represent a central research topic in DS. In this work, we introduce and
investigate a novel model for influence maximization (IM) on graphs using ideas
from kernel-based approximation, Gaussian process regression, and the
minimization of a corresponding variance term. Data-driven approaches can be
applied to determine proper kernels for this IM model and machine learning
methodologies are adopted to tune the model parameters. Compared to stochastic
models in this field that rely on costly Monte Carlo simulations, our model
allows for a simple and cost-efficient update strategy to compute optimal
influencing nodes on a graph. In several numerical experiments, we show the
properties and benefits of this new model.
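To illustrate the variance-minimization idea, here is a minimal sketch of greedy node selection driven by the Gaussian process posterior variance; the SPD matrix is a placeholder standing in for the paper's graph kernels, and the update shown is the standard Newton-basis downdate rather than the paper's specific strategy:

```python
import numpy as np

def greedy_variance_nodes(K, n_select):
    """Greedily pick nodes where the GP posterior variance is largest.

    K: precomputed SPD kernel/covariance matrix on the graph nodes
    (a placeholder here; the paper's graph kernels are not reproduced).
    """
    var = np.diag(K).astype(float)
    newton_cols = []   # Newton basis columns of already selected nodes
    selected = []
    for _ in range(n_select):
        i = int(np.argmax(var))
        # Posterior covariance column of node i given previous selections
        v = K[:, i] - sum(u * u[i] for u in newton_cols)
        u = v / np.sqrt(v[i])
        var = var - u ** 2   # rank-one downdate of the posterior variance
        newton_cols.append(u)
        selected.append(i)
    return selected

# Toy usage with a random SPD matrix standing in for a graph kernel
rng = np.random.default_rng(0)
B = rng.standard_normal((30, 30))
K = B @ B.T + 30.0 * np.eye(30)
print(greedy_variance_nodes(K, 5))
```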